-
Artificial intelligence (AI)-supported network traffic classification (NTC) has recently been developed for network measurement and quality-of-service (QoS) purposes. More recently, the federated learning (FL) approach has been promoted for distributed NTC development because datasets remain unshared, giving better privacy and confidentiality in raw networking data collection and sharing. However, network measurement still requires invasive probes and constant traffic monitoring. In this paper, we propose a non-invasive network traffic estimation and user profiling mechanism that leverages label inference in FL-based NTC. Specifically, the proposed scheme only monitors weight differences in FL model updates from a targeted user and recovers the user's network application (APP) labels as well as a rough estimate of the traffic pattern. Assuming a slotted FL update mechanism, the proposed scheme further maps inferred labels from multiple slots to different profiling classes that depend on, e.g., QoS and APP categorization. Without loss of generality, user profiles are determined based on normalized productivity, entertainment, and casual usage scores derived from an existing commercial router and its backend server. A slot extension mechanism is further developed for more accurate profiling beyond raw traffic measurement. Evaluations conducted on seven popular APPs across three user profiles demonstrate that our approach achieves accurate networking user profiling without invasive physical probes or constant traffic monitoring.
Free, publicly-accessible full text available October 6, 2026
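The label-inference idea can be illustrated with a toy sketch. For a softmax classifier trained with cross-entropy, a gradient step pulls the output-layer bias of every class present in the user's local batch upward, while absent classes are pushed down, so an observer who sees only the weight difference of an update can recover which labels the user trained on. The model size, learning rate, and data below are invented for illustration, not the paper's actual NTC model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical user whose local traffic contains only APPs 2 and 5 out of 7
n_classes, n_feat, lr = 7, 16, 0.1
W = 0.01 * rng.normal(size=(n_feat, n_classes))
b = np.zeros(n_classes)
X = rng.normal(size=(32, n_feat))    # synthetic per-flow features
y = rng.choice([2, 5], size=32)      # ground-truth APP labels
Y = np.eye(n_classes)[y]

# One local SGD step on the cross-entropy loss
P = softmax(X @ W + b)
b_new = b - lr * (P - Y).mean(axis=0)

# The observer sees only the update difference: classes present in the
# batch raise their output bias, absent classes lower theirs.
delta = b_new - b
inferred = sorted(int(c) for c in np.flatnonzero(delta > 0))
print(inferred)
```

With only the bias difference of a single update, the toy observer recovers exactly the set of APP labels in the user's local data.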
-
A critical use case of SLAM for mobile robots is to support localization during task-directed navigation. Current SLAM benchmarks overlook the importance of repeatability (precision) despite its impact on real-world deployments. TaskSLAM-Bench, a task-driven approach to SLAM benchmarking, addresses this gap. It employs precision as a key metric, accounts for SLAM's mapping capabilities, and has easy-to-meet requirements. Simulated and real-world evaluations of SLAM methods provide insights into the navigation performance of modern visual and LiDAR SLAM solutions. The outcomes show that passive stereo SLAM precision may match that of 2D LiDAR SLAM in indoor environments. TaskSLAM-Bench complements existing benchmarks and offers a richer assessment of SLAM performance in navigation-focused scenarios. Publicly available code permits in-situ SLAM testing in custom environments with properly equipped robots.
Free, publicly-accessible full text available October 25, 2026
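One simple way to score repeatability of task-directed navigation is the RMS spread of a robot's arrival positions over repeated runs to the same goal. The function name and numbers below are invented for illustration under that assumption, not TaskSLAM-Bench's actual metric definition.

```python
import numpy as np

def arrival_precision(endpoints):
    """Repeatability of goal arrival: RMS deviation of (x, y) arrival
    positions from their centroid over repeated runs. Lower = more precise."""
    pts = np.asarray(endpoints, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))

# Five repeated runs to one goal (metres), synthetic data
runs = [(2.00, 1.00), (2.02, 0.98), (1.99, 1.01), (2.01, 1.00), (1.98, 1.01)]
print(round(arrival_precision(runs), 4))
```

Unlike accuracy against ground truth, this score needs no external reference trajectory, only repeated trials, which is what makes precision an easy-to-meet benchmark requirement.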
-
This research-to-practice full paper describes a cohort-based undergraduate research program designed to improve STEM retention through structured mentoring and community building. Drawing on the Affinity Research Group (ARG) model, the program fosters faculty-student research collaboration and integrates faculty mentorship training, student-led peer mentoring, and structured interventions, such as research skills workshops and networking events. Each year, faculty from biology, chemistry, computer science, environmental science, and mathematics lead small-group research projects with recruited students who may participate for up to three years. Faculty and students receive ARG training to promote consistent mentoring practices. A credit-bearing, major-specific first-year orientation course supports recruitment and reinforces students' scientific identity. Faculty also engage in professional development workshops to strengthen student-centered mentoring approaches. Data collection includes surveys, interviews, retention tracking, and weekly journaling to assess STEM identity, belonging, and skill development. External evaluators reviewed the faculty focus groups to assess mentoring effectiveness. Initial findings show strong faculty engagement with the ARG model, with many adopting adaptive mentoring strategies that enhance student support. Students report increased confidence and belonging within their disciplines. However, cross-disciplinary collaboration remains limited, highlighting the need for more intentional networking within the cohort. Students also emphasized the value of peer collaboration alongside faculty mentorship. These results suggest that undergraduate research can serve as a powerful tool for building community and supporting persistence in STEM. Ongoing efforts will focus on expanding networking opportunities, strengthening peer collaboration, and evaluating long-term impacts on student retention.
Free, publicly-accessible full text available November 5, 2026
-
As conventional electronic materials approach their physical limits, the application of ultrafast optical fields to access transient states of matter captures the imagination. Inversion symmetry governs the optical parity selection rule, differentiating between accessible and inaccessible states of matter. To circumvent parity-forbidden transitions, the common practice is to break the inversion symmetry by material design or external fields. Here we report how the application of femtosecond ultraviolet pulses can energize a parity-forbidden dark exciton state in black phosphorus while maintaining its intrinsic material symmetry. Unlike its conventional bandgap absorption in the visible-to-infrared range, femtosecond ultraviolet excitation turns on efficient Coulomb scattering, promoting carrier multiplication and electronic heating to ~3000 K, and consequently populating its parity-forbidden states. Interferometric time- and angle-resolved two-photon photoemission spectroscopy reveals dark exciton dynamics of black phosphorus on a ~100 fs time scale and its anisotropic wavefunctions in energy-momentum space, illuminating potential applications in optoelectronics and photochemistry under ultraviolet optical excitation.
Free, publicly-accessible full text available December 1, 2026
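A back-of-envelope Boltzmann estimate shows why heating carriers to ~3000 K matters for populating otherwise inaccessible states: the thermal occupation factor exp(-ΔE/kT) of a state lying above the carrier distribution grows by many orders of magnitude. The 0.5 eV offset below is an arbitrary illustrative value, not a measured gap from the paper.

```python
import math

# Boltzmann occupation factor exp(-dE / kT) at a photoexcited (~3000 K)
# versus room-temperature (300 K) electronic temperature.
K_B = 8.617333e-5          # Boltzmann constant, eV/K
dE = 0.5                   # hypothetical energy offset above the band edge, eV

occ_hot = math.exp(-dE / (K_B * 3000.0))
occ_cold = math.exp(-dE / (K_B * 300.0))
print(occ_hot / occ_cold)  # enhancement of the thermal population
```

At 3000 K the factor is of order 0.1, while at 300 K it is vanishingly small, so transient electronic heating alone can hand substantial population to states that optical selection rules leave dark.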
-
Free, publicly-accessible full text available September 1, 2026
-
Global environmental change is causing a decline in biodiversity, with profound implications for ecosystem functioning and stability. It remains unclear how global change factors interact to influence the effects of biodiversity on ecosystem functioning and stability. Here, using data from a 24-year experiment, we investigate the impacts of nitrogen (N) addition, enriched CO2 (eCO2), and their interactions on the biodiversity-ecosystem functioning relationship (complementarity effects and selection effects), the biodiversity-ecosystem stability relationship (species asynchrony and species stability), and their connections. We show that biodiversity remains positively related to both ecosystem productivity (functioning) and its stability under N addition and eCO2. However, the combination of N addition and eCO2 diminishes the effects of biodiversity on complementarity and selection effects. In contrast, N addition and eCO2 do not alter the relationship between biodiversity and either species asynchrony or species stability. Under ambient conditions, both complementarity and selection effects are negatively related to species asynchrony, but neither is related to species stability; these links persist under N addition and eCO2. Our study offers insights into the underlying processes that sustain the functioning and stability of biodiverse ecosystems in the face of global change.
Free, publicly-accessible full text available December 1, 2026
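Complementarity and selection effects are conventionally separated with the Loreau-Hector additive partition of the net biodiversity effect (the abstract does not spell out its method, so this standard partition is an assumption). A minimal sketch with synthetic two-species yields:

```python
import numpy as np

def additive_partition(Y_obs, M, RY_exp):
    """Loreau-Hector additive partition of the net biodiversity effect.
    Y_obs: observed per-species yields in mixture; M: monoculture yields;
    RY_exp: expected relative yields (e.g. planted proportions).
    Returns (complementarity effect, selection effect)."""
    dRY = Y_obs / M - RY_exp                 # deviation of relative yield
    N = len(M)
    CE = N * dRY.mean() * M.mean()           # complementarity effect
    SE = N * np.cov(dRY, M, bias=True)[0, 1] # selection effect
    return CE, SE

M = np.array([100.0, 200.0])      # synthetic monoculture yields
Y_obs = np.array([70.0, 120.0])   # synthetic yields in a 50:50 mixture
RY_exp = np.array([0.5, 0.5])
CE, SE = additive_partition(Y_obs, M, RY_exp)
```

In this example both species overyield relative to expectation (positive complementarity, CE = 45), while overyielding is concentrated in the lower-yielding monoculture species (negative selection, SE = -5); the two terms sum to the net biodiversity effect, 190 - 150 = 40.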
-
Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV) and a thin device thickness. Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep learning model. This reduces the computational resource demands by orders of magnitude and facilitates fast reconstruction. Our deep learning algorithm can reconstruct object volumes over 4×6×0.6 mm³. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with a compact device footprint.

# DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope

[https://doi.org/10.5061/dryad.6t1g1jx83](https://doi.org/10.5061/dryad.6t1g1jx83)

## Description of the data and file structure

### DeepInMiniscope: Learned Integrated Miniscope

### Datasets, models and codes for 2D and 3D sample reconstructions

The dataset for 2D reconstruction includes test data for green stained lens tissue.

* Input: measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.
* Output: the slide containing green lens tissue features.

The dataset for 3D sample reconstruction includes test data for 3D reconstruction of an in-vivo mouse brain video recording.

* Input: time-series standard deviation of difference-to-local-mean weighted raw video.
* Output: reconstructed 4D volumetric video containing a 3-dimensional distribution of neural activities.

## Files and variables

### Download data, code, and sample results

1. Download data `data.zip`, code `code.zip`, results `results.zip`.
2. Unzip the downloaded files and place them in the same main folder.
3. Confirm that the main folder contains three subfolders: `data`, `code`, and `results`. Inside the `data` and `code` folders, there should be subfolders for each test case.

## Data

### 2D_lenstissue

**data_2d_lenstissue.mat:** measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.

* **Xt:** stacked 108 FOVs of the measured image, each centered at one microlens unit, with 720 x 720 pixels. Data dimension in order of (batch, height, width, FOV).
* **Yt:** placeholder variable for the reconstructed object, each centered at the corresponding microlens unit, with 180 x 180 voxels. Data dimension in order of (batch, height, width, FOV).

**reconM_0308:** trained multi-FOV ADMM-Net model for 2D lens tissue reconstruction.

**gen_lenstissue.mat:** generated lens tissue reconstruction produced by running the model with **2D_lenstissue.py**.

* **generated_images:** stacked 108 reconstructed FOVs of the lens tissue sample by multi-FOV ADMM-Net; the assembled full sample reconstruction is shown in results/2D_lenstissue_reconstruction.png.

### 3D_mouse

**reconM_g704_z5_v4:** trained 3D multi-FOV ADMM-Net model for 3D sample reconstructions.

**t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat:** time-series standard deviation of difference-to-local-mean weighted raw video.

* **Xts:** test video with 290 frames and 6 FOVs per frame, with 1408 x 1408 pixels per FOV. Data dimension in order of (frames, height, width, FOV).

**gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat:** generated 4D volumetric video containing a 3-dimensional distribution of neural activities.
* **generated_images_fu:** frame-by-frame 3D reconstruction of the recorded video in uint8 format. Data dimension in order of (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 x 416 voxels per depth.

Variables inside the saved model subfolders (reconM_0308 and reconM_g704_z5_v4):

* **saved_model.pb:** model computation graph, including architecture and input/output definitions.
* **keras_metadata.pb:** Keras metadata for the saved model, including model class, training configuration, and custom objects.
* **assets:** external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets.
* **variables.data-00000-of-00001:** numerical values of model weights and parameters.
* **variables.index:** index file that maps variable names to weight locations in .data.

## Code/software

### Set up the Python environment

1. Download and install the [Anaconda distribution](https://www.anaconda.com/download).
2. The code was tested with the following packages:
   * python=3.9.7
   * tensorflow=2.7.0
   * keras=2.7.0
   * matplotlib=3.4.3
   * scipy=1.7.1

## Code

**2D_lenstissue.py:** Python code for the multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of each section.

**lenstissue_2D.m:** Matlab code to display the generated image and reassemble sub-FOV patches.

**sup_psf.m:** Matlab script to load microlens coordinate data and to generate the PSF pattern.

**lenscoordinates.xls:** microlens unit coordinates table.

**3D mouse.py:** Python code for the multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of each section.

**mouse_3D.m:** Matlab code to display the reconstructed neural activity video and to calculate temporal correlation.
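The repository reassembles sub-FOV patches into the full sample in Matlab (lenstissue_2D.m, using the microlens coordinates from lenscoordinates.xls). A minimal Python sketch of the same reassembly idea, with synthetic patches and made-up patch-centre coordinates rather than the repository's actual geometry:

```python
import numpy as np

def assemble_fovs(patches, centers, canvas_shape):
    """Paste reconstructed sub-FOV patches onto a full-sample canvas,
    averaging wherever neighbouring patches overlap.
    patches: (n_fov, h, w); centers: (n_fov, 2) patch-centre pixels."""
    canvas = np.zeros(canvas_shape)
    weight = np.zeros(canvas_shape)
    h, w = patches.shape[1:]
    for patch, (cy, cx) in zip(patches, centers):
        y0, x0 = cy - h // 2, cx - w // 2
        canvas[y0:y0 + h, x0:x0 + w] += patch
        weight[y0:y0 + h, x0:x0 + w] += 1
    return canvas / np.maximum(weight, 1)

# Synthetic example: a 2 x 2 grid of four constant 4 x 4 patches
patches = np.stack([np.full((4, 4), v) for v in (1.0, 2.0, 3.0, 4.0)])
centers = np.array([(2, 2), (2, 6), (6, 2), (6, 6)])
full = assemble_fovs(patches, centers, (8, 8))
```

The same pattern scales to the 108 FOVs of Xt: each 180 x 180 reconstructed patch is pasted at its microlens centre, and the averaging weight map handles overlapping patch borders.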
## Access information

Other publicly accessible locations of the data:

* [https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope](https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope)
